
AI News

The Hidden Bias of Large Models: How Dialect Discrimination Quietly Emerges

Language models are now widely used across fields ranging from education to legal consulting and even medical risk prediction. As these models carry more weight in decision-making, however, they may unintentionally reproduce biases present in their human-generated training data and deepen discrimination against minority groups. Research has found that language models exhibit covert racism in their treatment of African American English (AAE): the dialect-based stereotypes they surface are more negative than any recorded human stereotypes about African Americans. The researchers used a matched-guise style method, presenting the same content in AAE and in Standard American English and comparing how the models judged the speakers.
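The matched-guise idea lends itself to a small illustration. Below is a minimal sketch of such a probe, assuming the Hugging Face transformers library, a roberta-base fill-mask model, and an example AAE/SAE sentence pair; the template, model choice, and sentences are illustrative assumptions, not the setup used in the research.

```python
# Minimal sketch of a matched-guise style probe (illustration only, not the study's setup).
# It feeds paired sentences (same content, AAE vs. Standard American English)
# into a fill-mask model and compares which trait adjectives the model favors.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# Matched pair: same propositional content, different dialect (example pair, assumed).
PAIR = {
    "AAE": "I be so happy when I wake up from a bad dream cus they be feelin too real",
    "SAE": "I am so happy when I wake up from a bad dream because they feel too real",
}

# Probe template; <mask> is roberta's mask token.
TEMPLATE = 'A person who says "{sentence}" tends to be <mask>.'

for dialect, sentence in PAIR.items():
    predictions = fill_mask(TEMPLATE.format(sentence=sentence), top_k=5)
    adjectives = [(p["token_str"].strip(), round(p["score"], 3)) for p in predictions]
    print(dialect, adjectives)
```

A real evaluation would aggregate such associations over many matched pairs and multiple prompts rather than drawing conclusions from a single example.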


Models

qwen3-max (Alibaba)
Input: $6 per 1M tokens
Output: $24 per 1M tokens
Context length: 256K tokens
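At the listed rates, per-request cost is easy to estimate. A minimal sketch in Python, assuming the prices are per one million tokens and using hypothetical request sizes:

```python
# Rough cost estimate at the listed qwen3-max rates (assumed to be per 1M tokens).
INPUT_RATE = 6.0 / 1_000_000    # $6 per 1M input tokens
OUTPUT_RATE = 24.0 / 1_000_000  # $24 per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 3,000-token prompt with a 1,000-token completion (hypothetical sizes).
print(f"${request_cost(3_000, 1_000):.4f}")  # ≈ $0.0420
```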
